In recent years, deep-learning-based approaches have been introduced to solve time-series forecasting problems. These methods have demonstrated impressive performance on univariate and low-dimensional multivariate forecasting tasks. However, when applied to high-dimensional multivariate forecasting problems, their performance is severely limited by practical constraints on training time and GPU memory. In this paper, inspired by a change of basis in Hilbert space, we propose a flexible data feature extraction technique that excels at high-dimensional multivariate forecasting tasks. Our approach was originally developed for the National Science Foundation (NSF) Algorithms for Threat Detection (ATD) 2022 Challenge. Implemented with an attention mechanism and a Convolutional Neural Network (CNN) architecture, our method demonstrates strong performance and compatibility. Our models trained on the GDELT Dataset finished in 1st and 2nd place in the ATD sprint series and hold promise for other time-series forecasting datasets.
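The abstract does not specify which basis the feature extraction uses; as a minimal, hedged sketch of the change-of-basis idea, assuming an orthonormal basis obtained from the training data via PCA, a high-dimensional multivariate series can be expressed in a small number of basis coordinates, forecast there by any low-dimensional model, and mapped back. All dimensions and names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_basis(X, k):
    """X: (T, D) multivariate series; returns a top-k orthonormal basis (D, k) via PCA."""
    Xc = X - X.mean(axis=0, keepdims=True)
    # Right singular vectors form an orthonormal basis of the variable space.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k].T                        # (D, k)

def to_coefficients(X, B):
    """Change of basis: express each time step in the k-dimensional basis."""
    return X @ B                           # (T, k)

def from_coefficients(C, B):
    """Map forecasts of the coefficients back to the original D variables."""
    return C @ B.T                         # (T, D)

# Usage: forecast in the reduced coordinates, then reconstruct.
X = np.random.randn(500, 2000)             # e.g. 2000 correlated series
B = fit_basis(X, k=32)
C = to_coefficients(X, B)                   # (500, 32) -- feed this to the forecaster
X_hat = from_coefficients(C, B)             # back-project the forecast
```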
In this work we introduce reinforcement learning techniques for solving lexicographic multi-objective problems. These are problems that involve multiple reward signals, and where the goal is to learn a policy that maximises the first reward signal, and subject to this constraint also maximises the second reward signal, and so on. We present a family of both action-value and policy gradient algorithms that can be used to solve such problems, and prove that they converge to policies that are lexicographically optimal. We evaluate the scalability and performance of these algorithms empirically, demonstrating their practical applicability. As a more specific application, we show how our algorithms can be used to impose safety constraints on the behaviour of an agent, and compare their performance in this context with that of other constrained reinforcement learning algorithms.
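The paper's algorithms are not reproduced here, but the core lexicographic selection rule can be illustrated with a short, hedged sketch: given a separate value estimate per objective, actions are filtered objective by objective, keeping only those within a small slack of the best, so lower-priority rewards break ties among actions that are near-optimal for higher-priority ones. The tolerance and tie-breaking choices below are assumptions for illustration only.

```python
import numpy as np

def lexicographic_greedy(q_values, tol=1e-3):
    """
    Pick an action that is lexicographically optimal across objectives.

    q_values: array of shape (n_objectives, n_actions); row 0 is the
    highest-priority objective.  `tol` is a slack so near-optimal actions
    for one objective stay available for tie-breaking by the next one.
    """
    candidates = np.arange(q_values.shape[1])
    for q in q_values:
        best = q[candidates].max()
        candidates = candidates[q[candidates] >= best - tol]
        if len(candidates) == 1:
            break
    return int(candidates[0])

# Example: action 2 ties with action 0 on objective 0, but wins on objective 1.
q = np.array([[1.0, 0.2, 1.0],
              [0.1, 0.9, 0.5]])
print(lexicographic_greedy(q))  # -> 2
```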
Overfitting is a problem in Convolutional Neural Networks (CNNs) that causes poor generalization to unseen data. To remedy this problem, many new and diverse data augmentation (DA) methods have been proposed to supplement or generate more training data and thereby improve its quality. In this work, we propose a new data augmentation algorithm: VoronoiPatches (VP). We primarily utilize non-linear recombination of information within an image, fragmenting and occluding small information patches. Unlike other DA methods, VP uses small convex polygon-shaped patches in a random layout to transport information around within an image. The sudden transitions created between patches and the original image can, optionally, be smoothed. In our experiments, VP outperformed current DA methods with respect to model variance and overfitting tendencies. We demonstrate that data augmentation utilizing non-linear recombination of information within images, and non-orthogonal shapes and structures, improves CNN model robustness on unseen data.
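As a hedged sketch of the kind of augmentation described (not the authors' exact algorithm), an image can be partitioned into Voronoi cells around random seed points and a few cells' contents copied to random positions, occluding what was there. Patch counts, offsets, and the omitted border smoothing are illustrative assumptions.

```python
import numpy as np

def voronoi_patches(img, n_seeds=40, n_patches=8, rng=None):
    """Move a few convex Voronoi-cell patches to random positions within the image."""
    rng = np.random.default_rng(rng)
    h, w = img.shape[:2]
    seeds = rng.integers(0, [h, w], size=(n_seeds, 2))

    # Label every pixel with its nearest seed -> convex Voronoi cells.
    yy, xx = np.mgrid[0:h, 0:w]
    d2 = (yy[..., None] - seeds[:, 0]) ** 2 + (xx[..., None] - seeds[:, 1]) ** 2
    labels = d2.argmin(axis=-1)

    out = img.copy()
    for cell in rng.choice(n_seeds, size=n_patches, replace=False):
        ys, xs = np.nonzero(labels == cell)
        dy, dx = rng.integers(-h // 4, h // 4), rng.integers(-w // 4, w // 4)
        ty, tx = np.clip(ys + dy, 0, h - 1), np.clip(xs + dx, 0, w - 1)
        out[ty, tx] = img[ys, xs]   # transport the patch, occluding the target area
    return out
```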
Neural networks have revolutionized the area of artificial intelligence and introduced transformative applications to almost every scientific field and industry. However, this success comes at a great price; the energy requirements for training advanced models are unsustainable. One promising way to address this pressing issue is by developing low-energy neuromorphic hardware that directly supports the algorithm's requirements. The intrinsic non-volatility, non-linearity, and memory of spintronic devices make them appealing candidates for neuromorphic devices. Here we focus on the reservoir computing paradigm, a recurrent network with a simple training algorithm suitable for computation with spintronic devices since they can provide the properties of non-linearity and memory. We review technologies and methods for developing neuromorphic spintronic devices and conclude with critical open issues to address before such devices become widely used.
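The review is about hardware, but the reservoir computing paradigm it builds on is easy to illustrate in software: a fixed, random recurrent network supplies non-linearity and memory (the role a spintronic device would play physically), and only a linear readout is trained. A minimal echo-state-network sketch follows; all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 200

# Fixed random reservoir: in hardware, a spintronic device would supply this
# non-linear, history-dependent map; here it is simulated.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1 for fading memory

def run_reservoir(u_seq, leak=0.3):
    x, states = np.zeros(n_res), []
    for u in u_seq:
        x = (1 - leak) * x + leak * np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
u = np.sin(np.linspace(0, 20 * np.pi, 2000))
X, y = run_reservoir(u[:-1]), u[1:]

# Only the linear readout is trained (ridge regression) -- the "simple
# training algorithm" that makes the paradigm hardware-friendly.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```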
We develop a wall model for large-eddy simulation (LES) that accounts for various pressure-gradient effects using multi-agent reinforcement learning (MARL). The model is trained using low-Reynolds-number flow over periodic hills, with agents distributed on the wall along the computational grid points. The model uses a wall eddy-viscosity formulation as the boundary condition, which is shown to provide better predictions of the mean velocity field than the typical wall-shear-stress formulation. Each agent receives states based on local instantaneous flow quantities at an off-wall location, computes a reward based on the estimated wall-shear stress, and provides an action to update the wall eddy viscosity at each time step. The trained wall model is validated in wall-modeled LES (WMLES) of flow over periodic hills at higher Reynolds numbers, and the results show the effectiveness of the model on flows with pressure gradients. Analysis of the trained model indicates that it is capable of distinguishing between the various pressure-gradient regimes present in the flow.
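A hedged sketch of one wall agent's interface, following only what the abstract states (observe local off-wall flow quantities, act on the wall eddy viscosity, be rewarded for matching a reference wall-shear stress). The specific state variables, action range, and reward shape are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

class WallModelAgent:
    """One agent at a wall grid point in the MARL wall-model setting (sketch)."""

    def __init__(self, nu_t_init=1e-4):
        self.nu_t = nu_t_init          # wall eddy viscosity (boundary condition)

    def state(self, u_offwall, y_offwall, dpdx, nu):
        # Non-dimensional local quantities at the off-wall matching point.
        return np.array([u_offwall * y_offwall / nu,
                         dpdx * y_offwall / u_offwall**2])

    def act(self, action):
        # Action in [-1, 1] scales the eddy viscosity up or down each time step.
        self.nu_t *= np.exp(0.1 * np.clip(action, -1.0, 1.0))
        return self.nu_t

    def reward(self, tau_model, tau_ref):
        # Higher reward the closer the modelled wall-shear stress is to the reference.
        return -abs(tau_model - tau_ref) / (abs(tau_ref) + 1e-12)
```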
Animals are able to rapidly infer, from limited experience, when sets of state-action pairs have equivalent reward and transition dynamics. Modern reinforcement learning systems, on the other hand, must painstakingly learn through trial and error that state-action pairs are equivalent in value, requiring excessively large numbers of samples from their environment. MDP homomorphisms have been proposed that reduce the MDP of the observed environment to an abstract MDP, enabling more sample-efficient policy learning. Consequently, impressive gains in sample efficiency have been achieved when a suitable MDP homomorphism can be constructed a priori, typically by exploiting a practitioner's knowledge of environment symmetries. We propose a novel approach to constructing homomorphisms in discrete action spaces, which uses a partial model of the environment dynamics to infer which state-action pairs lead to the same state, reducing the size of the state-action space by a factor equal to the cardinality of the action space. We call this method equivalent effect abstraction. In a gridworld environment, we empirically demonstrate that equivalent effect abstraction improves sample efficiency in a model-free setting and planning efficiency for model-based approaches. Furthermore, we show on Cartpole that our method outperforms existing approaches that learn homomorphisms, while using 33 times less training data.
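As a hedged sketch of the grouping idea (not the authors' implementation), a partial forward model can be used to map each state-action pair to the state it leads to; pairs that land on the same state are grouped and can share one value estimate, shrinking the effective state-action space. The toy gridworld and function names below are assumptions for illustration.

```python
from collections import defaultdict

def equivalent_effect_groups(states, actions, forward_model):
    """Group state-action pairs by the state they lead to under a partial dynamics model."""
    groups = defaultdict(list)
    for s in states:
        for a in actions:
            s_next = forward_model(s, a)
            if s_next is not None:            # partial model: some transitions unknown
                groups[s_next].append((s, a))
    return groups

# Toy 1-D gridworld: stepping left from s and right from s-2 both land on s-1,
# so those two pairs share a group (and could share a value estimate).
states, actions = range(5), (-1, +1)
model = lambda s, a: min(max(s + a, 0), 4)
for target, pairs in equivalent_effect_groups(states, actions, model).items():
    print(target, pairs)
```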
Legged robot locomotion is a challenging task due to a myriad of sub-problems, such as the hybrid dynamics of foot contact and the effects of the desired gait on the terrain. Accurate and efficient state estimation of the floating base and the leg joints can help alleviate many of these issues by providing feedback information to the robot controller. Current state estimation methods rely heavily on a combination of visual and inertial measurements to provide real-time estimates, and therefore struggle in perceptually poor environments. In this work, we show that by leveraging the robot's kinematic chain model through a factor graph formulation, we can perform state estimation of the base and leg joints using primarily proprioceptive inertial data. We perform state estimation using a combination of preintegrated IMU measurements, forward-kinematic computations, and contact detections in a factor-graph-based framework, allowing our state estimates to be constrained by the robot model. Experimental results in simulation and on hardware show that our approach outperforms current proprioceptive state estimation methods by 27% on average, while generalizing to a variety of legged robot platforms. We demonstrate our results quantitatively and qualitatively on a wide variety of trajectories.
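A real implementation would build the factor graph over SE(3) poses with a dedicated library; as a 1-D, hedged sketch of how the two proprioceptive factor types jointly constrain the base trajectory, the least-squares problem below combines IMU-preintegration displacement factors with forward-kinematics (leg odometry) factors from stance phases. All numbers and noise scales are made up for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

N = 10
imu_delta = np.full(N, 0.10) + 0.02 * np.random.randn(N)     # noisy preintegrated IMU displacements
kin_delta = np.full(N, 0.10) + 0.005 * np.random.randn(N)    # leg-odometry displacements (contact)

def residuals(x):
    dx = np.diff(x)
    r_prior = [x[0]]                                   # anchor the first base pose
    r_imu = (dx - imu_delta) / 0.02                    # IMU factors (looser)
    r_kin = (dx - kin_delta) / 0.005                   # kinematic/contact factors (tighter)
    return np.concatenate([r_prior, r_imu, r_kin])

x0 = np.zeros(N + 1)
sol = least_squares(residuals, x0)
print("estimated base positions:", np.round(sol.x, 3))
```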
We address the problem of autonomous locomotion for humanoid robots attempting to perform useful tasks in a world built for humans. Planning and control algorithms for humanoid robots walking over rough terrain have become increasingly capable. At the same time, commercially available depth cameras have become increasingly accurate, and GPU computing has become a primary tool in AI research. In this paper, we present a newly constructed behavior control system for achieving fast, autonomous, bipedal walking without pauses or deliberation. We achieve this using a recently published rapid planar-region perception algorithm, a height-map-based body path planner, an A* footstep planner, and a momentum-based walking controller. We put these elements together to form a behavior control system supported by modern software development practices and simulation tools.
Structural causal models (SCMs) provide a principled approach to identifying causation from observational and experimental data in disciplines ranging from economics to medicine. SCMs, however, which are typically represented as graphical models, cannot rely on data alone and require the support of domain knowledge. A key challenge in this context is the absence of a methodological framework for encoding priors (background knowledge) into causal models in a systematic manner. We propose an abstraction called the causal knowledge hierarchy (CKH) for encoding priors into causal models. Our approach is based on the foundation of "levels of evidence" in medicine, with a focus on confidence in causal information. Using CKH, we present a methodological framework for encoding causal priors from various sources of information and combining them to derive an SCM. We evaluate our approach on a simulated dataset and demonstrate overall performance compared to the ground-truth causal model, with a sensitivity analysis.
This paper aims to help structure the risk landscape associated with large-scale language models (LMs). In order to foster advances in responsible innovation, an in-depth understanding of the potential risks posed by these models is needed. A wide range of established and anticipated risks are analysed in detail, drawing on multidisciplinary expertise and literature from computer science, linguistics, and the social sciences. We outline six specific risk areas: I. Discrimination, Exclusion and Toxicity; II. Information Hazards; III. Misinformation Harms; IV. Malicious Uses; V. Human-Computer Interaction Harms; VI. Automation, Access, and Environmental Harms. The first area concerns stereotyping, unfair discrimination, exclusionary norms, toxic language, and lower performance by social group for LMs. The second focuses on risks from private data leaks or from LMs correctly inferring sensitive information. The third addresses risks arising from poor, false, or misleading information, including in sensitive domains, and knock-on risks such as the erosion of trust in shared information. The fourth considers risks from actors who try to use LMs to cause harm. The fifth focuses on risks specific to LMs used to underpin conversational agents that interact with human users, including unsafe use, manipulation, or deception. The sixth discusses risks of environmental harm, job automation, and other challenges that may have disparate effects on different social groups or communities. In total, we review 21 risks. We discuss the points of origin of different risks and point to potential mitigation approaches. Lastly, we discuss organisational responsibilities in implementing mitigations, and the role of collaboration and participation. We highlight directions for further research, particularly on expanding the toolkit for assessing and evaluating the outlined risks in LMs.